Semicircular Distribution (Wigner semicircle)#
The semicircular distribution is a continuous, compact-support distribution whose density is proportional to the height of a semicircle.
In canonical form it lives on \([-1,1]\) with pdf

\[ f(x) = \frac{2}{\pi}\,\sqrt{1-x^2},\qquad |x|\le 1. \]
It appears naturally as:

- the x-coordinate of a point chosen uniformly at random in a disk (a geometric construction)
- the limiting eigenvalue distribution in the Wigner semicircle law (random matrix theory / free probability)
- a smooth, bounded alternative to uniform/triangular priors when you want more mass in the middle and zero density at the endpoints
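As a quick numerical sanity check (a sketch; the helper name `f` is illustrative), the canonical density integrates to 1 and matches `scipy.stats.semicircular`:

```python
import numpy as np
from scipy import integrate, stats

# Canonical semicircular pdf on [-1, 1].
def f(x):
    return (2.0 / np.pi) * np.sqrt(np.clip(1.0 - x**2, 0.0, None))

total, _ = integrate.quad(f, -1.0, 1.0)
print(total)  # ≈ 1.0

# Agrees with SciPy's implementation on a grid.
xs = np.linspace(-1.0, 1.0, 101)
print(np.max(np.abs(f(xs) - stats.semicircular.pdf(xs))))
```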
Learning goals#
By the end, you should be able to:
- write down and interpret the pdf/cdf in standard and (loc, scale) form
- compute mean/variance/skewness/kurtosis and understand the characteristic function
- derive the likelihood, including its support constraint
- sample it efficiently using NumPy only (uniform disk construction)
- visualize the pdf/cdf and Monte Carlo samples
- use `scipy.stats.semicircular` for `pdf`, `cdf`, `rvs`, and `fit`
- see the Wigner semicircle law in a small eigenvalue simulation
```python
import os

import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import plotly.io as pio
from scipy import special, stats

pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
np.set_printoptions(precision=6, suppress=True)
rng = np.random.default_rng(42)
```
1) Title & Classification#
- Distribution name: semicircular
- Type: continuous
- Support:
  - Standard: \(x \in [-1,1]\)
  - Location–scale: \(x \in [\mu-R,\,\mu+R]\)
- Parameter space (SciPy):
  - `loc` = \(\mu \in \mathbb{R}\)
  - `scale` = \(R > 0\)

We’ll use \(\mu\) (location/center) and \(R\) (scale/radius) throughout.
2) Intuition & Motivation#
What it models#
The semicircular distribution is bounded, symmetric, and unimodal:

- maximal density at the center
- density decreasing smoothly to 0 at the endpoints

It’s a good model when a quantity is constrained to an interval and values near the center are more typical than values near the extremes.
A geometric construction (the key intuition)#
Pick a point \((U,V)\) uniformly at random from the disk of radius \(R\) centered at \((0,0)\). The marginal distribution of the x-coordinate \(U\) is semicircular.
Reason: for a fixed x, the set of disk points with that x-coordinate is a vertical segment of length

\[ 2\sqrt{R^2-x^2}. \]

Under a uniform area distribution, larger cross-sections are more likely, so the x-marginal density is proportional to \(\sqrt{R^2-x^2}\), i.e. a semicircle.
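This construction is easy to verify by simulation (a sketch; rejection sampling keeps the uniformly drawn points that land inside the unit disk):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Rejection sampling: uniform points in the bounding square, keep those in the unit disk.
pts = rng.uniform(-1.0, 1.0, size=(200_000, 2))
inside = pts[(pts**2).sum(axis=1) <= 1.0]

# The x-coordinates of uniform disk points should be semicircular on [-1, 1].
ks = stats.kstest(inside[:, 0], stats.semicircular.cdf)
print(ks.statistic, ks.pvalue)
```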
Typical real-world use cases#
- Geometry / kinematics: random 2D positions in a circular cross-section, projected onto one axis.
- Physics / networks: spectra of certain large random systems can be approximated by a semicircle.
- Bayesian modeling: a smooth prior for a bounded parameter when you want central mass and vanishing density at the boundaries.
Relations to other distributions#
- If \(X\sim\mathrm{Semicircular}(\mu,R)\), then \(\tfrac{X-(\mu-R)}{2R}\in[0,1]\) has a \(\mathrm{Beta}(\tfrac32,\tfrac32)\) distribution.
- Compare with the arcsine law on \([-1,1]\): \(f_{\text{arcsine}}(x)=\tfrac{1}{\pi\sqrt{1-x^2}}\) (spikes at the endpoints) vs \(f_{\text{semi}}(x)=\tfrac{2}{\pi}\sqrt{1-x^2}\) (zero at the endpoints).
- Random matrix theory: the eigenvalues of a Wigner matrix (after scaling) converge to a semicircle with radius 2.
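The Beta connection can be checked numerically (a quick sketch): after the affine map to \([0,1]\), the semicircular cdf and the \(\mathrm{Beta}(\tfrac32,\tfrac32)\) cdf should agree to machine precision.

```python
import numpy as np
from scipy import stats

mu, R = 0.5, 2.0
x = np.linspace(mu - R, mu + R, 501)

# Map the support [mu-R, mu+R] onto [0, 1] and compare cdfs.
u = (x - (mu - R)) / (2.0 * R)
cdf_semi = stats.semicircular.cdf(x, loc=mu, scale=R)
cdf_beta = stats.beta.cdf(u, 1.5, 1.5)
print(np.max(np.abs(cdf_semi - cdf_beta)))  # ≈ 0
```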
3) Formal Definition#
Canonical (standard) form#
A standard semicircular random variable \(Y\) has support \([-1,1]\) and pdf

\[ f_Y(y) = \frac{2}{\pi}\,\sqrt{1-y^2}\;\mathbf{1}\{|y|\le 1\}. \]

Location–scale form#
Let \(\mu\in\mathbb{R}\) and \(R>0\), and define

\[ X = \mu + R\,Y. \]

Then \(X\sim\mathrm{Semicircular}(\mu,R)\) with support \([\mu-R,\mu+R]\) and pdf

\[ f_X(x\mid\mu,R) = \frac{2}{\pi R}\,\sqrt{1-\Big(\frac{x-\mu}{R}\Big)^2}\;\mathbf{1}\{|x-\mu|\le R\}. \]

Equivalently,

\[ f_X(x\mid\mu,R) = \frac{2}{\pi R^2}\,\sqrt{R^2-(x-\mu)^2}\;\mathbf{1}\{|x-\mu|\le R\}. \]
CDF#
For the standard form (\(Y\in[-1,1]\)):

\[ F_Y(y)= \begin{cases} 0, & y\le -1,\\ \dfrac{1}{2} + \dfrac{y\sqrt{1-y^2}+\arcsin(y)}{\pi}, & -1<y<1,\\ 1, & y\ge 1. \end{cases} \]

For \(X=\mu+RY\):

\[ F_X(x\mid\mu,R) = F_Y\!\left(\frac{x-\mu}{R}\right). \]

In SciPy this is `stats.semicircular(loc=mu, scale=R)`.
```python
def _check_loc_scale(loc: float, scale: float) -> None:
    if not (np.isfinite(loc) and np.isfinite(scale) and scale > 0):
        raise ValueError("Require finite loc and scale > 0.")


def semicircular_pdf(x, loc: float = 0.0, scale: float = 1.0):
    """Semicircular pdf. Vectorized over x."""
    _check_loc_scale(loc, scale)
    x = np.asarray(x, dtype=float)
    y = (x - loc) / scale
    out = np.zeros_like(y, dtype=float)
    inside = np.abs(y) <= 1.0
    out[inside] = (2.0 / (np.pi * scale)) * np.sqrt(np.clip(1.0 - y[inside] ** 2, 0.0, None))
    return out


def semicircular_cdf(x, loc: float = 0.0, scale: float = 1.0):
    """Semicircular cdf. Vectorized over x."""
    _check_loc_scale(loc, scale)
    x = np.asarray(x, dtype=float)
    y = (x - loc) / scale
    out = np.zeros_like(y, dtype=float)
    out[y >= 1.0] = 1.0
    mid = (y > -1.0) & (y < 1.0)
    ym = y[mid]
    out[mid] = 0.5 + (ym * np.sqrt(1.0 - ym**2) + np.arcsin(ym)) / np.pi
    return out


def semicircular_logpdf(x, loc: float = 0.0, scale: float = 1.0):
    """Log-pdf. Returns -inf outside the support and at the endpoints."""
    _check_loc_scale(loc, scale)
    x = np.asarray(x, dtype=float)
    y = (x - loc) / scale
    out = np.full_like(y, -np.inf, dtype=float)
    # Strict inequality: the endpoints keep the -inf fill value directly,
    # avoiding a divide-by-zero warning from log1p(-1).
    inside = np.abs(y) < 1.0
    y_in = y[inside]
    out[inside] = np.log(2.0 / (np.pi * scale)) + 0.5 * np.log1p(-y_in**2)
    return out


def semicircular_rvs(size, loc: float = 0.0, scale: float = 1.0, rng=None):
    """NumPy-only sampling via the 'uniform disk' construction."""
    _check_loc_scale(loc, scale)
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(size)
    theta = rng.random(size) * (2.0 * np.pi)
    r = np.sqrt(u)  # sqrt makes (r, theta) uniform over the disk's area
    return loc + scale * r * np.cos(theta)
```
4) Moments & Properties#
Let \(X\sim\mathrm{Semicircular}(\mu,R)\).
Mean, variance, skewness, kurtosis#
By symmetry around \(\mu\):

\[ \mathbb{E}[X]=\mu,\qquad \text{skewness}=0. \]

The variance is

\[ \mathrm{Var}(X)=\frac{R^2}{4}. \]

The excess kurtosis (kurtosis minus 3) is

\[ \gamma_2 = -1\qquad\text{(so the kurtosis is }2\text{)}. \]

More generally, all odd central moments are 0, and the even central moments are Catalan-number scaled:

\[ \mathbb{E}\big[(X-\mu)^{2n}\big] = C_n\left(\frac{R^2}{4}\right)^{n}, \qquad C_n=\frac{1}{n+1}\binom{2n}{n}. \]
MGF and characteristic function#
Let \(I_1\) be the modified Bessel function of the first kind (order 1) and \(J_1\) the Bessel function of the first kind (order 1). Then

\[ M_X(t)=\mathbb{E}[e^{tX}] = e^{\mu t}\,\frac{2 I_1(Rt)}{Rt},\qquad t\in\mathbb{R}, \]

\[ \varphi_X(t)=\mathbb{E}[e^{itX}] = e^{i\mu t}\,\frac{2 J_1(Rt)}{Rt},\qquad t\in\mathbb{R}. \]

(These are well-defined at \(t=0\) by taking limits: \(\tfrac{2I_1(z)}{z}\to 1\) and \(\tfrac{2J_1(z)}{z}\to 1\) as \(z\to 0\).)
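As a sanity check (a sketch), the MGF formula can be compared against direct numerical integration of \(e^{tx}f_X(x)\) over the support:

```python
import numpy as np
from scipy import integrate, special, stats

mu, R, t = 0.3, 1.5, 0.7

# E[e^{tX}] by numerical integration of e^{tx} * pdf(x) over [mu-R, mu+R].
mgf_num, _ = integrate.quad(
    lambda x: np.exp(t * x) * stats.semicircular.pdf(x, loc=mu, scale=R),
    mu - R, mu + R,
)

# Closed form: e^{mu*t} * 2 I_1(R t) / (R t).
mgf_formula = np.exp(mu * t) * 2.0 * special.i1(R * t) / (R * t)
print(mgf_num, mgf_formula)
```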
Entropy (differential)#
Location doesn’t change differential entropy, and scaling by \(R\) adds \(\log R\). The closed form is

\[ H(X)=\log(\pi R)-\frac{1}{2}. \]
```python
from math import comb


def semicircular_mean(loc: float = 0.0, scale: float = 1.0) -> float:
    _check_loc_scale(loc, scale)
    return float(loc)


def semicircular_var(scale: float = 1.0) -> float:
    _check_loc_scale(0.0, scale)
    return float(scale**2 / 4.0)


def semicircular_excess_kurtosis() -> float:
    return -1.0


def catalan_number(n: int) -> int:
    if n < 0:
        raise ValueError("n must be >= 0")
    return comb(2 * n, n) // (n + 1)


def semicircular_central_moment_2n(n: int, scale: float = 1.0) -> float:
    """E[(X-μ)^(2n)] for the semicircular distribution."""
    _check_loc_scale(0.0, scale)
    return float(catalan_number(n) * (scale**2 / 4.0) ** n)


def semicircular_mgf(t, loc: float = 0.0, scale: float = 1.0):
    _check_loc_scale(loc, scale)
    t = np.asarray(t, dtype=float)
    z = scale * t
    # Stable handling near z=0: 2*I1(z)/z -> 1 + z^2/8 + O(z^4)
    ratio = np.where(np.abs(z) < 1e-12, 1.0 + z**2 / 8.0, 2.0 * special.i1(z) / z)
    return np.exp(loc * t) * ratio


def semicircular_cf(t, loc: float = 0.0, scale: float = 1.0):
    _check_loc_scale(loc, scale)
    t = np.asarray(t, dtype=float)
    z = scale * t
    # Near z=0: 2*J1(z)/z -> 1 - z^2/8 + O(z^4)
    ratio = np.where(np.abs(z) < 1e-12, 1.0 - z**2 / 8.0, 2.0 * special.j1(z) / z)
    return np.exp(1j * loc * t) * ratio


def semicircular_entropy(scale: float = 1.0) -> float:
    _check_loc_scale(0.0, scale)
    return float(np.log(np.pi * scale) - 0.5)
```
```python
# Quick Monte Carlo sanity check
mu, R = 0.5, 2.0
x_mc = semicircular_rvs(250_000, loc=mu, scale=R, rng=rng)

print("MC mean :", x_mc.mean(), " | theory:", semicircular_mean(mu, R))
print("MC var :", x_mc.var(), " | theory:", semicircular_var(R))
print("MC skew :", stats.skew(x_mc), " | theory: 0")
print("MC ex-k :", stats.kurtosis(x_mc, fisher=True), " | theory:", semicircular_excess_kurtosis())

# Entropy: compare our closed form to SciPy
print("entropy formula:", semicircular_entropy(R))
print("entropy SciPy :", stats.semicircular.entropy(scale=R))

# Check the 4th central moment against the Catalan formula (n=2)
mu4_mc = np.mean((x_mc - mu) ** 4)
mu4_th = semicircular_central_moment_2n(2, scale=R)
print("E[(X-μ)^4] MC:", mu4_mc, " | theory:", mu4_th)
```
```
MC mean : 0.49854065493504535  | theory: 0.5
MC var : 1.0025105699993875  | theory: 1.0
MC skew : 0.0002972765510045654  | theory: 0
MC ex-k : -1.0034177602258956  | theory: -1.0
entropy formula: 1.3378770664093453
entropy SciPy : 1.3378770664093453
E[(X-μ)^4] MC: 2.0066310114582215  | theory: 2.0
```
5) Parameter Interpretation#
The semicircular distribution is a location–scale family:

- \(\mu\) shifts the distribution left/right and equals the mean.
- \(R\) is half the support width (a radius) and sets the spread.

Key effects of \(R\):

- Support expands: \([\mu-R,\mu+R]\)
- Peak height decreases: \(f(\mu)=\tfrac{2}{\pi R}\)
- Variance grows quadratically: \(\mathrm{Var}(X)=\tfrac{R^2}{4}\)

In standardized coordinates \(Z=(X-\mu)/R\), the shape is fixed on \([-1,1]\).
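These identities can be confirmed directly with `scipy.stats.semicircular` (a quick sketch):

```python
import numpy as np
from scipy import stats

mu, R = 1.0, 2.0
dist = stats.semicircular(loc=mu, scale=R)

print(dist.pdf(mu), 2.0 / (np.pi * R))  # peak height 2/(pi R)
print(dist.var(), R**2 / 4.0)           # variance R^2/4
print(dist.support())                   # (mu - R, mu + R)
```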
```python
scales = [0.5, 1.0, 2.0]
locs = [0.0, 1.0]

fig = go.Figure()
for loc in locs:
    for scale in scales:
        x = np.linspace(loc - scale, loc + scale, 700)
        fig.add_trace(
            go.Scatter(
                x=x,
                y=semicircular_pdf(x, loc=loc, scale=scale),
                name=f"loc={loc:g}, scale={scale:g}",
            )
        )
fig.update_layout(
    title="Semicircular pdf: how loc and scale change the curve",
    xaxis_title="x",
    yaxis_title="density",
)
fig.show()
```
6) Derivations#
6.1 Expectation and variance (from the disk construction)#
Use the geometric representation: let \((U,V)\) be uniform on the disk of radius \(R\) centered at \((0,0)\). Then the x-coordinate \(U\) has the semicircular distribution (center 0, radius \(R\)).
A clean way to generate a uniform disk point is polar coordinates:

- \(\Theta \sim \mathrm{Unif}(0,2\pi)\)
- \(S \sim \mathrm{Unif}(0,1)\)
- radius \(\rho = R\sqrt{S}\)

Then \((U,V)=(\rho\cos\Theta,\rho\sin\Theta)\) is uniform on the disk.
So a semicircular draw can be written as

\[ X-\mu = \rho\cos\Theta = R\sqrt{S}\,\cos\Theta. \]

**Mean.** Since \(\mathbb{E}[\cos\Theta]=0\),

\[ \mathbb{E}[X]=\mu. \]

**Variance.** Using the independence of \(S\) and \(\Theta\):

\[ \mathrm{Var}(X)=\mathbb{E}[(X-\mu)^2] = R^2\,\mathbb{E}[S]\,\mathbb{E}[\cos^2\Theta] = R^2\cdot\frac{1}{2}\cdot\frac{1}{2} = \frac{R^2}{4}. \]
6.2 Likelihood#
For i.i.d. data \(x_1,\dots,x_n\) from \(\mathrm{Semicircular}(\mu,R)\), define \(y_i=(x_i-\mu)/R\). The likelihood is

\[ L(\mu,R)=\prod_{i=1}^n \frac{2}{\pi R}\,\sqrt{1-y_i^2}\;\mathbf{1}\{|y_i|\le 1\}. \]

Equivalently, the log-likelihood is

\[ \ell(\mu,R)=n\log\Big(\frac{2}{\pi R}\Big) + \frac{1}{2}\sum_{i=1}^n \log(1-y_i^2), \quad\text{subject to }\max_i|x_i-\mu|\le R. \]

If the constraint is violated, \(\ell(\mu,R)=-\infty\).
Practical implication: parameter estimation is a constrained optimization problem because the support depends on ((\mu,R)). This is similar in spirit to fitting a uniform distribution: you must choose ((\mu,R)) so the interval covers the observed data.
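A minimal sketch of such a constrained fit (a hypothetical grid-search MLE, not SciPy's `fit` routine; the grid bounds are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.semicircular.rvs(loc=0.5, scale=2.0, size=2_000, random_state=rng)

def neg_loglik(mu, R, x):
    # Support constraint: every observation must lie strictly inside [mu-R, mu+R].
    if np.max(np.abs(x - mu)) >= R:
        return np.inf
    return -np.sum(stats.semicircular.logpdf(x, loc=mu, scale=R))

# Crude grid search over (mu, R); a real fit would refine around the best cell.
mus = np.linspace(data.mean() - 0.5, data.mean() + 0.5, 41)
Rs = np.linspace(0.5 * data.std(), 4.0 * data.std(), 41)
best = min((neg_loglik(m, r, data), m, r) for m in mus for r in Rs)
print("grid MLE (mu, R):", best[1], best[2])
```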
7) Sampling & Simulation#
NumPy-only sampling via a uniform disk#
From the geometric construction:

1. Sample \(S\sim\mathrm{Unif}(0,1)\) and \(\Theta\sim\mathrm{Unif}(0,2\pi)\).
2. Set \(\rho = R\sqrt{S}\).
3. Return \(X = \mu + \rho\cos\Theta\).

This is exact and uses only uniform random numbers. We already implemented this as `semicircular_rvs`.
```python
# Sampling demo
x_demo = semicircular_rvs(10, loc=0.0, scale=1.0, rng=rng)
x_demo
```

```
array([ 0.560276,  0.157799, -0.942746, -0.7539  , -0.73464 ,  0.511127,
       -0.158642,  0.243663, -0.867028, -0.499481])
```
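The Beta(3/2, 3/2) relation from Section 2 gives an equally simple exact sampler (a sketch; the helper name is illustrative):

```python
import numpy as np
from scipy import stats

def semicircular_rvs_beta(size, loc=0.0, scale=1.0, rng=None):
    """Sample via X = loc - scale + 2*scale*B with B ~ Beta(3/2, 3/2)."""
    rng = np.random.default_rng() if rng is None else rng
    return loc - scale + 2.0 * scale * rng.beta(1.5, 1.5, size=size)

# Sanity check against the semicircular cdf.
s = semicircular_rvs_beta(100_000, loc=0.5, scale=2.0, rng=np.random.default_rng(2))
ks = stats.kstest(s, stats.semicircular(loc=0.5, scale=2.0).cdf)
print(ks.statistic)
```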
8) Visualization#
We’ll visualize:

- the pdf
- the cdf
- Monte Carlo samples vs the theoretical pdf
```python
mu, R = 0.0, 1.0
x = np.linspace(mu - R - 0.2, mu + R + 0.2, 1200)
pdf = semicircular_pdf(x, loc=mu, scale=R)
cdf = semicircular_cdf(x, loc=mu, scale=R)

fig_pdf = go.Figure(go.Scatter(x=x, y=pdf, name="pdf"))
fig_pdf.update_layout(title="Semicircular PDF (standard)", xaxis_title="x", yaxis_title="density")
fig_pdf.show()

fig_cdf = go.Figure(go.Scatter(x=x, y=cdf, name="cdf"))
fig_cdf.update_layout(title="Semicircular CDF (standard)", xaxis_title="x", yaxis_title="F(x)")
fig_cdf.show()

# Monte Carlo vs pdf
n = 80_000
s = semicircular_rvs(n, loc=mu, scale=R, rng=rng)

fig = go.Figure()
fig.add_trace(
    go.Histogram(
        x=s,
        nbinsx=70,
        histnorm="probability density",
        name="Monte Carlo",
        opacity=0.6,
    )
)
fig.add_trace(go.Scatter(x=x, y=pdf, name="theory pdf", line=dict(color="black")))
fig.update_layout(
    title="Monte Carlo samples vs semicircular pdf",
    xaxis_title="x",
    yaxis_title="density",
    barmode="overlay",
)
fig.show()
```
9) SciPy Integration#
SciPy provides the distribution as `scipy.stats.semicircular`:

- `stats.semicircular.pdf(x, loc=mu, scale=R)`
- `stats.semicircular.cdf(x, loc=mu, scale=R)`
- `stats.semicircular.rvs(size=..., loc=mu, scale=R, random_state=...)`
- `stats.semicircular.fit(data)` estimates `loc` and `scale`
We’ll verify that our NumPy formulas match SciPy.
```python
mu, R = 1.25, 0.8
x = np.linspace(mu - R, mu + R, 800)

pdf_ours = semicircular_pdf(x, loc=mu, scale=R)
pdf_scipy = stats.semicircular.pdf(x, loc=mu, scale=R)
cdf_ours = semicircular_cdf(x, loc=mu, scale=R)
cdf_scipy = stats.semicircular.cdf(x, loc=mu, scale=R)

print("max |pdf diff|:", np.max(np.abs(pdf_ours - pdf_scipy)))
print("max |cdf diff|:", np.max(np.abs(cdf_ours - cdf_scipy)))

# Sampling + fitting
true_loc, true_scale = -0.5, 1.7
sample = stats.semicircular.rvs(loc=true_loc, scale=true_scale, size=2500, random_state=rng)
loc_hat, scale_hat = stats.semicircular.fit(sample)
print("true loc, scale:", true_loc, true_scale)
print("fit loc, scale:", loc_hat, scale_hat)
```

```
max |pdf diff|: 2.220446049250313e-16
max |cdf diff|: 1.1102230246251565e-16
true loc, scale: -0.5 1.7
fit loc, scale: -0.5069591823846296 1.698045426706722
```
10) Statistical Use Cases#
10.1 Hypothesis testing (goodness-of-fit)#
When you have a bounded, symmetric dataset, the semicircular distribution can be a candidate model.
A standard diagnostic is a goodness-of-fit test such as the Kolmogorov–Smirnov (KS) test.
Below we compare two datasets on the same support:
- data generated from the semicircular model (should look consistent)
- data generated from a uniform model (should be inconsistent)
(As always: if you estimate parameters from the same data you test, the raw KS p-values are not exact; a parametric bootstrap is a better choice.)
```python
mu, R = 0.2, 1.3
n = 600

x_semi = stats.semicircular.rvs(loc=mu, scale=R, size=n, random_state=rng)
x_unif = stats.uniform.rvs(loc=mu - R, scale=2 * R, size=n, random_state=rng)

ks_semi = stats.kstest(x_semi, "semicircular", args=(mu, R))
ks_unif = stats.kstest(x_unif, "semicircular", args=(mu, R))
print("KS (true semicircular) statistic:", ks_semi.statistic, "p:", ks_semi.pvalue)
print("KS (uniform) statistic:", ks_unif.statistic, "p:", ks_unif.pvalue)

x_grid = np.linspace(mu - R, mu + R, 800)

fig = go.Figure()
fig.add_trace(
    go.Histogram(
        x=x_semi,
        nbinsx=55,
        histnorm="probability density",
        name="data: semicircular",
        opacity=0.55,
    )
)
fig.add_trace(
    go.Histogram(
        x=x_unif,
        nbinsx=55,
        histnorm="probability density",
        name="data: uniform",
        opacity=0.40,
    )
)
fig.add_trace(
    go.Scatter(
        x=x_grid,
        y=stats.semicircular.pdf(x_grid, loc=mu, scale=R),
        name="semicircular pdf (H0)",
        line=dict(color="black"),
    )
)
fig.update_layout(
    title="Goodness-of-fit intuition: semicircular vs uniform data",
    xaxis_title="x",
    yaxis_title="density",
    barmode="overlay",
)
fig.show()
```

```
KS (true semicircular) statistic: 0.04323935373187415 p: 0.20588329780325731
KS (uniform) statistic: 0.06402741241309892 p: 0.013958570716509898
```
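The parametric bootstrap mentioned above can be sketched as follows (refit on each simulated dataset, then compare the observed KS statistic to its bootstrap distribution; `B` is kept small here for speed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = stats.semicircular.rvs(loc=0.2, scale=1.3, size=300, random_state=rng)

# Fit the null model, then compute the KS statistic at the fitted parameters.
loc_hat, scale_hat = stats.semicircular.fit(data)
ks_obs = stats.kstest(data, "semicircular", args=(loc_hat, scale_hat)).statistic

B = 200  # use more in practice
ks_boot = np.empty(B)
for b in range(B):
    resample = stats.semicircular.rvs(loc=loc_hat, scale=scale_hat, size=len(data), random_state=rng)
    l_b, s_b = stats.semicircular.fit(resample)  # refit on each bootstrap sample
    ks_boot[b] = stats.kstest(resample, "semicircular", args=(l_b, s_b)).statistic

p_boot = np.mean(ks_boot >= ks_obs)
print("bootstrap p-value:", p_boot)
```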
10.2 Bayesian modeling (a bounded prior)#
Suppose a parameter \(\theta\) is known a priori to lie in \([\mu-R,\mu+R]\), and you want a prior that:

- favors values near \(\mu\)
- assigns zero density at the endpoints

A semicircular prior does exactly that.
There’s no general conjugacy here, but for simple likelihoods we can compute posteriors on a grid.
Below we compare a uniform prior vs a semicircular prior for a normal-location model.
```python
def _normalize_on_grid(grid, log_unnorm_density):
    log_unnorm_density = np.asarray(log_unnorm_density, dtype=float)
    log_unnorm_density = log_unnorm_density - np.max(log_unnorm_density)
    un = np.exp(log_unnorm_density)
    z = np.trapz(un, grid)
    return un / z


# Model: y_i | theta ~ Normal(theta, sigma_obs^2)
mu, R = 0.0, 1.0
sigma_obs = 0.35
n = 25
theta_true = 0.25
y = theta_true + rng.normal(scale=sigma_obs, size=n)

# Grid on the support
grid = np.linspace(mu - R, mu + R, 2001)

# Log-likelihood up to a constant
loglike = -0.5 * np.sum(((y[:, None] - grid[None, :]) / sigma_obs) ** 2, axis=0)

# Priors
logprior_uniform = np.zeros_like(grid)
logprior_uniform[(grid < mu - R) | (grid > mu + R)] = -np.inf
logprior_semi = semicircular_logpdf(grid, loc=mu, scale=R)

post_uniform = _normalize_on_grid(grid, loglike + logprior_uniform)
post_semi = _normalize_on_grid(grid, loglike + logprior_semi)

mean_uniform = np.trapz(grid * post_uniform, grid)
mean_semi = np.trapz(grid * post_semi, grid)

fig = go.Figure()
fig.add_trace(go.Scatter(x=grid, y=post_uniform, name=f"posterior (uniform prior), mean={mean_uniform:.3f}"))
fig.add_trace(go.Scatter(x=grid, y=post_semi, name=f"posterior (semicircular prior), mean={mean_semi:.3f}"))
fig.add_vline(x=theta_true, line=dict(color="black", dash="dot"), annotation_text="true θ")
fig.update_layout(
    title="Bayesian example: uniform vs semicircular prior on a bounded θ",
    xaxis_title="θ",
    yaxis_title="posterior density",
)
fig.show()
```
10.3 Generative modeling (Wigner semicircle law)#
A famous appearance of the semicircular distribution is in the eigenvalues of large random symmetric matrices.
Let \(W\) be an \(n\times n\) Wigner matrix: symmetric, with i.i.d. entries in the upper triangle, scaled by \(1/\sqrt{n}\).
As \(n\to\infty\), the empirical distribution of its eigenvalues converges to a semicircle (radius 2 in the common normalization).
We’ll simulate a moderate \(n\) and compare the eigenvalue histogram to `Semicircular(loc=0, scale=2)`.
```python
n = 250

A = rng.normal(size=(n, n))
W = np.triu(A)
W = W + W.T - np.diag(np.diag(W))
W = W / np.sqrt(n)

eigvals = np.linalg.eigvalsh(W)

x = np.linspace(-2.2, 2.2, 900)
pdf = stats.semicircular.pdf(x, loc=0.0, scale=2.0)

fig = go.Figure()
fig.add_trace(
    go.Histogram(
        x=eigvals,
        nbinsx=60,
        histnorm="probability density",
        name="eigenvalues",
        opacity=0.65,
    )
)
fig.add_trace(go.Scatter(x=x, y=pdf, name="semicircle pdf (scale=2)", line=dict(color="black")))
fig.update_layout(
    title="Wigner semicircle law (simulation)",
    xaxis_title="eigenvalue",
    yaxis_title="density",
    barmode="overlay",
)
fig.show()

print("eigenvalue range:", (eigvals.min(), eigvals.max()))
```

```
eigenvalue range: (-1.930360346008193, 1.9533443905131391)
```
11) Pitfalls#
- Invalid parameters: `scale` must be strictly positive.
- Support constraints: the pdf is 0 outside \([\mu-R,\mu+R]\); in log space, that’s `-inf`.
- Boundary behavior: the pdf goes to 0 at \(\mu\pm R\) and the log-pdf goes to `-inf`. Data near the boundaries can make optimization/fitting numerically delicate.
- Numerical rounding: quantities like \(1-y^2\) can become slightly negative near \(|y|=1\); use `np.clip` before `sqrt`. For CDF formulas using `arcsin(y)`, make sure \(y\in[-1,1]\) (again, clipping helps).
- Goodness-of-fit with fitted parameters: if you estimate \((\mu,R)\) from the data and then run a KS test, the classical p-value is optimistic; use a parametric bootstrap if you need calibrated inference.
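The rounding pitfall can be shown in two lines (the smallest double above 1, which can arise from `(x - loc) / scale` at the boundary, already makes \(1-y^2\) negative):

```python
import numpy as np

y = np.nextafter(1.0, 2.0)  # smallest double > 1
print(np.sqrt(1.0 - y**2))  # nan, with a RuntimeWarning
print(np.sqrt(np.clip(1.0 - y**2, 0.0, None)))  # 0.0
```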
12) Summary#
- The semicircular distribution is a bounded, symmetric, continuous distribution with density shaped like a semicircle.
- In \((\mu,R)\) form: support \([\mu-R,\mu+R]\), mean \(\mu\), variance \(R^2/4\), skewness 0, excess kurtosis \(-1\), entropy \(\log(\pi R)-\tfrac12\).
- The characteristic function and MGF use Bessel functions: \(\varphi(t)=e^{i\mu t}\,\tfrac{2J_1(Rt)}{Rt}\).
- A simple exact sampler uses the uniform disk construction: sample a point uniformly in a disk and take its x-coordinate.
- The Wigner semicircle law explains why this distribution shows up in random matrix eigenvalues.